@Part<Chapters, root "NAIVE.MSS[rdg,dbl]">
@BEGIN<TitlePage>
@TitleBox{
@MajorHeading[What's an Analogy Like?]}

@i{@Value<Date>}

@Heading{Russell Greiner}
@B[Heuristic Programming Project
Computer Science Department
Stanford University]

@BEGIN[TEXT, Indent 0]
@C<Abstract>: @i{This survey
presents a preliminary catalogue of the many different
forms, senses and uses of analogies.
An extensive list of examples of analogies is included 
to motivate and justify these "dimensions".
Additional appendices discuss other related issues --
including the connection of models and reformulation to analogy.}
@END[TEXT]

@END<TitlePage>
@Chapter<Introduction>

The primary goal of this research project is a computer program 
capable of 
@Comment{generating,}
understanding and using an analogy.
One obvious prerequisite for this task
is a solid understanding of just what an analogy is.
Despite the reams written on this topic, there seems to be no adequate
(and certainly no universally accepted)
definition, nor even a complete set of characteristics.
The available literature shows, instead, that each person uses his own
(usually unstated) sense of "analogy".

Here is another stab into this morass.
This paper follows the lead established by @Cite[NaivePhysics],
@Comment{ (as well as @Cite[Isaac] and @Cite[Envisage],) }
in presenting a "naive view" of analogy --
a semi-formal specification of the space of analogies.
Our eventual goal is a description of the properties of this space;
this paper concludes with a first approximation to this,
motivated and justified by the results presented in the earlier chapters.

We follow a fairly scruffy (see @Cite<Scruffy>) empirical methodology
to reach this crescendo.
First, the data:
an overview of many different types of analogies is presented
in Chapter @Ref<ExampleOverview>.
(The actual examples are included in Appendix @Ref<Examples>.)
The analysis which follows will be based on these examples.
Their considerable breadth and diversity are also relevant,
demonstrating how pervasive, and hence important,
analogy is in many of our cognitive processes.
(It was this realization which motivated this whole line of research.)

Chapter @Ref(Categories) then presents an initial 
characterization of the space of analogies,
based on these examples.
It first suggests several different senses of the term "analogy",
and then outlines what types of tasks are considered "analogizing".
A spanning set of primitive operations is then abstracted from these tasks.
The end of the chapter sketches the types of questions which could be answered
using appropriate combinations of these primitives;
the comprehensiveness of this range
reinforces our view that these operations are sufficient.
Chapter @Ref<Applications> then pursues an orthogonal axis,
asking in what different ways an analogy can be used.

Each of these chapters suggests some of the dimensions in the space of analogies.
Chapter @Ref<Dimensions> presents a more complete set of axes,
first reiterating these, and then describing additional ones.
This list provides a basis for comparing and contrasting
different types of analogies,
which is one of the main objectives of this paper.
The other purpose is a list of properties which characterize analogies --
many of these are listed in the final Chapter @Ref(Properties).

Each of the appendices elaborates some point(s) which the core text glossed over.
As mentioned above,
Appendix @Ref<Examples> provides an extensive list of examples of analogies.
Next, Appendix @Ref(WhyAnalogy) addresses the question
"why are analogies so ubiquitous?".
It mentions some of the advantages deriving from the use of an analogy,
and then postulates some reasons why this ability may have evolved.

Appendix @Ref(Multiplicity) then rebuts the curiously common view that 
the "best possible analogy" depends only on the analogues themselves.
Clearly other characteristics of the situation
-- such as the people involved, and the purpose of the analogy --
are also important.
The appendix demonstrates this by listing a number of situations
where many different analogies can join the same pair of analogues.

This multiplicity of analogies, in turn, motivates the discussion on reformulation
conducted in Appendix @Ref<Reform>.
In addition to defining this term and explaining how it relates to analogies,
this appendix indicates why a reformulation step may be
necessary to understand an arbitrary analogy.

Many of the standard English phrases which serve to specify an analogy
are presented in Appendix @Ref<Analogy-Vocab>.
Finally, Appendix @Ref<Misc> concludes this "core-dump",
listing additional ideas which pertain (somehow) to analogy,
but which did not fit into any other category.

Two final (meta-)notes:
First, this report will, hopefully, evolve into one
chapter of a much larger document 
-- the author's eventual thesis.
This may explain both the particular perspective presented,
and the miscellaneous unfulfilled pointers which appear throughout this paper.
Second, while this report attempts to provide a general description of the 
phenomenon of analogy,
its single authorship means personal biases will undoubtedly appear,
potentially compromising its generality.
Comments are actively solicited,
particularly those which plug obvious gaps.

@Chapter<Categories of Analogies>
@Label<ExampleOverview>

This chapter will quickly outline some of the commonly understood
senses of analogy.
It was this diversity of meanings and uses which prompted this survey paper --
there was no prior listing of these meanings,
much less a coherent framework in which to arrange these diverse senses.

We feel that any reasonable model of analogy 
must cover (that is, be able to explain) this full range of examples.
The definition of analogy proposed later
is indeed capable of dealing with all of these cases.@Foot{
See SubAppendix @Ref<ReformOther>.
The justification of the completeness of this model appears in @Cite<ThesisProp>,
but not in this "What's in an Analogy" paper.}

One final note: 
This chapter provides little more than the title of each class,
and a brief description.
Appendix @Ref<Examples>, below, will follow this outline, providing much more
information about each case.

@BEGIN<Enumerate>
@BEGIN<Multiple>@B(Used for Comparisons)@*
@BEGIN(DESC1)

@i{Implicit question:} "Is A like B?" or "How is A like B?"

@i{Purpose:}@\The analogy establishes a connection between a pair of models,
designed to explain some fact(s) about B.

@i{Subcases:}@\Explicit comparison, simile, only one feature mapped over, equation
@END<DESC1>@END<Multiple>

@BEGIN<Multiple>@B<Used for Prediction>@*
@BEGIN(DESC1)

@i{Implicit question:} Given that A is like B, and P(A), is P'(B) true?

@i{Purpose:}@\The analogy is used to conjecture properties of one object,
based on facts about another.

Moderate in Size - Intrafield, Instance to Instance
@END<DESC1>@END<Multiple>

@BEGIN<Multiple>@B<Find or Produce an Analogue>@*
@BEGIN(DESC1)

@i{Implicit question:} Find or create a B which is like A in some manner.

@i{Purpose:}@\Construction of a new object with certain desired properties
(using the analogical mapping as an intensional definition).

@i{Subcases:}@\Problem Transformation
@END<DESC1>@END<Multiple>

@BEGIN<Multiple>@B<Problem Restatement>@*
@BEGIN(DESC1)

@i{Implicit question:} Find an alternative representation of problem A 
(which renders the problem easier to solve).@Foot{
It has been claimed that many of the significant advances science has made
have been via restructuring of known problems into tractable forms
(? Polya, Kuhn ?).}

@i{Purpose:}@\To solve problem A.@*
(One might also derive new insights into
a large range of similar problems as a side effect.)

@i{Subcases:}@\Fixed Algorithmic Transformations, Heuristic Methods
@END<DESC1>@END<Multiple>

@BEGIN<Multiple>@B<Literary uses to conjure images>@*
@BEGIN(DESC1)

@i{Implicit question:} The goal is to describe some feature, P(x), of a certain
situation, B.
Here it is usually achieved by finding a commonly known object, A,
which has a pronounced salient feature corresponding to this P.

@i{Purpose:}@\Suggesting that P(B) holds.  (Using the notation defined above.)

@i{Subcases:}@\Similarity Metaphor, Idiom, Literary Metaphor
@END<DESC1>@END<Multiple>

@BEGIN<Multiple>@B<Proportional Metaphors>@*
@BEGIN(DESC1)

@i{Implicit question:} Find @i{y} which is to @i{B} as @i{x} is to @i{A}.

@i{Purpose:}@\(Rapid) transference of facts/assertions to @i{y}, based on
either @i{x} itself or its role with respect to @i{A}.

@i{Subcases:}@\Explicit question, 
Extending a term from its "native" domain to a foreign one, Antiquation, 
Sub-conscious use of metaphors, Mathematics
@END<DESC1>@END<Multiple>

@BEGIN<Multiple>@B<Familial resemblance>@*
@BEGIN(DESC1)

@i{Implicit Question}: Given a set {A@-<i>}, find how they are all similar.

@i{Purpose:}@\This is an imprecise, but widely used, classification
method.

@i{Subcases:}@\(Justification of a) Classification, Issues of Style,
(Pseudo-)Class Membership
@END<DESC1>@END<Multiple>
@END<Enumerate>
@Chapter(Preliminary Observations)
@Label(Categories)

There are several observations we can make from these categories,
which are reinforced by the actual examples listed in Appendix @Ref<Examples>.
To a first approximation, the general issue is
@BEGIN[Quotation]
What does it mean to state that 
"A is like B (possibly, within the constraint @G(a))"?
@END[Quotation]
The quick answer is
@BEGIN[Quotation]
Two objects are analogous if they share some characteristic.
@END[Quotation]
Given this model,
simple feature matching seems sufficient to explain analogy 
-- A is analogous to B whenever enough of A's properties match the
corresponding features of B.

What makes this task of analogy non-trivial is determining which set of features
to use for this comparison, given that there will be many distinct ways of
representing both A and B.
Not all feature spaces are the same:
Appendices @Ref(Multiplicity) and @Ref(Reform) both point out that
the relevant analogy can be readily (@i{i.e.}, syntactically) found
in certain spaces, but not others.
(The overall thesis may be regarded as a collection of "tricks" useful for
finding an apt feature space 
-- one which explicates the desired, otherwise hidden features.)
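
To make this feature-matching picture concrete, here is a minimal sketch of the
naive model (in Python, with invented feature names; it is not any program
described in this report), illustrating how the verdict depends entirely
on which feature space is chosen:
@BEGIN(Verbatim)
# A minimal sketch of naive feature matching: A ~ B when "enough" features overlap.
def analogous(a_features, b_features, threshold=0.5):
    """Judge the analogy by the fraction of shared features (Jaccard overlap)."""
    shared = a_features & b_features
    return len(shared) / len(a_features | b_features) >= threshold, shared

# The same pair of analogues in two hypothetical feature spaces:
surface  = ({"wet", "flows", "in-pipes"}, {"flows", "invisible", "in-wires"})
abstract = ({"conserved-at-junction", "driven-by-potential", "meets-resistance"},
            {"conserved-at-junction", "driven-by-potential", "meets-resistance"})

print(analogous(*surface))    # (False, {'flows'}): weak match in the surface space
print(analogous(*abstract))   # (True, ...): a perfect match once the apt features are explicated
@END(Verbatim)
The work, as just argued, lies in arriving at the second representation,
not in computing the overlap.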

This comparison also feels non-decompositional, in that
one does not need to explicitly specify the particular properties which
A and B share.
In this respect it is subsumed by the general process of
"model-based inferencing"@Foot{
Not to be confused with "model-directed reasoning"...}
which philosophers discuss,
where an unfamiliar object is described using a model of a more familiar object.

The balance of this chapter presents additional specifications of analogy.

@Section<Different Senses>
@Label<Senses>
First, there are three clearly distinct senses of the term "analogy".
(While we tried to leave the listing and our choice of apt headings
relatively unbiased,
this distinction was impossible to avoid.)
These cases are:
@BEGIN(Itemize)
@BEGIN(DESC1)
@b{Similarity case}

@i{Given} objects A, B, and formula P(@G(u)),@*
where A satisfies P(@G(u));

@i{Find} a predicate P'(@G(u)) which is like P(@G(u)),
and which is satisfied by B.
@END(DESC1)

@BEGIN(DESC1)
@b<Proportional@Foot{
In fact, the word "analogy" comes from the Greek word for "proportion".}
case>

@i{Given} objects A, B, and x,@*
where x bears some relation (call it R(@G(u),@G(m)))
to A -- that is, R(A,x);

@i{Find} an object y which bears a similar relation to B.@*
This requires finding an R'(@G(u),@G(m)) which is similar to R(@G(u),@G(m)),
of course.@Foot{
@Comment{ This does NOT yet work! @TAG(Unify) }
@Counter(Unify)
@Set<Unify=FootNoteCounter>
We could "unify" the proportional case into the similarity case by
simply regarding R(@G(u),x) as a one-place predicate of @G(u), P(@G(u)),
which is mapped onto another unary predicate, P'(@G(u)),
corresponding to R'(@G(u),y).
The terms x and y thus serve to define the predicates P and P' from R and R',
respectively.
(A small code sketch of this reduction appears at the end of this section.)
Two other notes:
@BEGIN<ITEM1>
@Cite<Miller> uses a similar description.

SubAppendix @Ref<ReformOther> suggests why these two senses of analogy seem
so similar, by proposing one way of blurring the distinction.
@END<ITEM1>}
@END(DESC1)

@BEGIN(DESC1)
@B<Familial Resemblance>

@i{Given} objects {A@-<i>},

@i{Find} how they are all similar (possibly within some constraint).@*
[This may be regarded as finding a set of unary predicates,
{P@-<i>}, such that P@-<i>(A@-<i>), where these P@-<i>s are all similar.]
@END(DESC1)
@END(Itemize)
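
The footnote to the proportional case suggests folding the fixed proportional
term into the relation, thereby reducing that sense to the similarity sense.
A minimal sketch of this reduction (in Python, with a hypothetical toy relation;
none of this comes from the report):
@BEGIN(Verbatim)
# Reduce a proportional analogy over R(u, m) to a similarity analogy over the
# one-place predicate P(u) = R(u, x) obtained by fixing the proportional term x.
def fix_term(relation, term):
    """Turn the binary relation R into the unary predicate lambda u: R(u, term)."""
    return lambda u: relation(u, term)

# Hypothetical domain: "Cow is to Calf as Ewe is to Lamb".
def parent_of(parent, child):
    return (parent, child) in {("cow", "calf"), ("ewe", "lamb")}

P       = fix_term(parent_of, "calf")   # P(u)  = "u is the parent of calf"
P_prime = fix_term(parent_of, "lamb")   # P'(u) = "u is the parent of lamb"

print(P("cow"), P_prime("ewe"))         # True True: the similarity machinery now applies
@END(Verbatim)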

@Section(Types of Tasks)
@Label<TaskTypes>

Another obvious dimension (orthogonal to the senses mentioned above)
is the type of task.
There are several types of tasks which require an analogy to be generated.
@BEGIN(ITEMIZE)
Find (@i{i.e.} explain) the analogy

Find an analogue

Judge/Select analogue(s)
@END<ITEMIZE>
Examples of these appear in Section @Ref<AnalQuests>.
In all three cases there may be some @i(a priori) specification which
restricts the allowed answers.  We will say more about this later.

We might (naively) think of these tasks as @i(generating) the analogy.
There are two other types of tasks which use the derived analogy --
taking it as input, along with the pair of analogues.
A known analogy can be used to
@BEGIN(ITEMIZE)
derive (@i{i.e.}, conclusively deduce)

suggest (@i{i.e.}, plausibly conjecture)
@END(ITEMIZE)
a new fact (or conjecture) about B, based on some fact(s) known about A.
These operations may serve to "extend" the analogy --
by explicating more properties which the analogues share.
(See the second note in SubSection @Ref<Primitives>.)

<<HERE: what about the tasks of
combining different analogies?>>

<<Within "Explaining analogies", consider "have the same equations".>>
@Section<Analogy Primitives>
@Label<Primitives>

There are two basic analogy-related operations:
analogy generation and analogy use.@Foot{
There is nothing terribly deep about this generation/comprehension
dichotomy;
it is identical to the split found in natural language programs,
which is often labelled as Language Production versus Language Comprehension.
(@i<c.f.> @Cite<Levy>, p204.)}
The next Section @Ref(AnalQuests) will demonstrate that these two primitives 
span the space of analogy questions,
by showing the considerable range of questions which combinations of these 
primitives can answer.
The rest of this section will now elaborate these two actions.

The @i{analogy generation process}
takes as input the two analogues,
A and B, augmented with some constraint, @G(a),
and generates a reason, @G(b)@-(AB), why A and B should be considered analogous.
(In general this reason may be considered a mapping of the various parts
of A onto parts of B.)
The @i{analogy understanding process} is able to use these reasons.
It takes the "vehicle" analogue, A, and the reason @G(b)@-(AB),
and conjectures some new (@i{i.e.} previously unrealized) 
property of the "topic" analogue, B.
(Actually, it need not be given this B analogue explicitly.
Indeed, one type of task is to determine this B from an intensional analogical
description.  
The next subsection brings up this point again,
as does Dimension @Ref<DeVSRe> in Chapter @Ref(Dimensions).)
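
A minimal sketch of these two primitives (in Python, with invented representations;
this is not the author's design, and real reasons would require the constraint/reason
language discussed in Appendix @Ref(Analogy-Vocab)):
@BEGIN(Verbatim)
# Analogues are described by named parts; a "reason" beta maps parts of A onto parts of B.
def generate_analogy(a, b, constraint):
    """Analogy generation: given analogues A and B and a constraint alpha,
    produce beta_AB.  Naively: map each part the analogues share, plus any
    further part of A which the constraint alpha explicitly asks to carry over."""
    return {part: part for part in a if part in b or constraint(part)}

def use_analogy(a, reason, b):
    """Analogy understanding: given the vehicle A and the reason beta_AB,
    conjecture new (previously unrealized) properties of the topic B."""
    return {b_part: a[a_part] for a_part, b_part in reason.items() if b_part not in b}

fluid       = {"conserved_at_junction": True, "driven_by_potential": True, "wet": True}
electricity = {"driven_by_potential": True}

beta = generate_analogy(fluid, electricity, lambda part: part == "conserved_at_junction")
print(use_analogy(fluid, beta, electricity))   # {'conserved_at_junction': True}
@END(Verbatim)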

A few notes:
@BEGIN(ITEM1)

The triple <A, B, @G(b)@-(AB)> maps fairly nicely onto the 
<vehicle, topic, ground> structure discussed in @Cite<Paivio>.  
(Of course many other researchers discuss similar triplets as well.)

@BEGIN(Multiple)
@TAG(ConstraintReason)
Constraints and reasons seem very similar
-- perhaps even interchangeable.
For this reason they should probably be expressed in the same language.
(Appendix @Ref(Analogy-Vocab) is a starting vocabulary of such 
constraint/analogy phrases.)
This would mean that what we considered analogy-generation should really
be called analogy-expansion.
That is,
this process really starts with a simple, degenerate analogy
(defined by the starting set of constraints/reasons)
and uses it to produce a more complete analogy,
by adding an additional set of constraints/reasons
which "closes" that starting set.

This is consonant with the results documented in @Cite<Learn-HR> --
that in learning, people tend to clump together certain sets of features.
This suggests that people have and use certain coherent feature spaces,
rather than just unconnected, individual properties.
@END(Multiple)

As the next section will show, 
the "binary-ness" of these operators seems too limiting.
Unless we can simulate the N-ary performance needed to handle the
familial case using only these binary actions,
we may have to extend the analogy-generating primitives,
allowing each to take N analogues, rather than just two.

@BEGIN(Multiple)
It was misleading to word the first list of tasks
mentioned in Section @Ref(TaskTypes) above as "analogy generating",
and the other two as "analogy using".
Among other problems, this seems to lead to a funny asymmetry --
why are the only analogy understanding tasks involved with similarity analogies,
and not with the two other senses mentioned in the previous Section @Ref(Senses)?

@*
While the division is correct, the tasks have been mis-assigned.
We will see in the next section that each of the latter two types of 
"generating" tasks
(@i{viz.} "Find analogue" and "Judge/Select analogue(s)")
must use the analogy, after it has been generated.
@END(Multiple)
@END(ITEM1)

@Section(Analogy Questions)
@Label(AnalQuests)

This chapter has already mentioned two of the dimensions of the space of analogies
-- its distinct senses and its task types.
This section shows the cross product of these axes,
summarized in the table below.
Each entry includes the canonical question for this case, a possible paraphrase
of that question, and one (possibly naive) way of expressing this in terms of
the two primitives shown above.
Each of the numbers given in the @i{From examples:} attribute of each case below
is an index into Appendix @Ref<Examples>,
to an analogy which requires this operation.
Note that the most interesting and useful analogy tasks require
both the generation of an analogy connecting two analogues,
and a use of this result.
(A small code sketch showing how one of these entries can be composed from
the two primitives appears at the end of this section.)

@BEGIN(Comment)
Note each of the questions is worded as a find or prove query.
We might "invert" the question, to ask, for example,
whether A is like B for reasons @G(b)@-<AB>.
@END(Comment)

@*
@B(NOTATION:)
@BEGIN(ITEM1)
The phrase "within the constraint @G(a)" 
refers to an
@i<a priori> pre-specification for the analogical match.
When the restriction arises because we are trying to match proportionally,
with respect to x, we will write @G(a)@-<x>;
and when both proportional terms are known, we will write @G(a)@-<xy>.
Notes:
@BEGIN(ITEM1)
This restriction will not always be explicitly mentioned in the problem statement
-- that is, it may be purely implicit, left to be inferred by the hearer.

These constraints will also convey
how this analogy will be used --
@i{i.e.}, whether it is to be used to find a proportional term 
or to predict the value of some slot.
(Such information may be heavily used to guide the generation of the analogy.)
@END(ITEM1)

"for reasons @G(b)" indicates the nature of this analogical mapping.@*
@G(b)@-<AB> will usually denote the reasons why A is like B.
@END(ITEM1)

@P(<Tasks which require the analogy to be generated.>)
@BEGENUM1<>

@B(Similarity:)
@BEGENUM1<>
@BEGIN(DESC1)
[@ux<Find analogy>]

Given analogues A and B, and some constraint @G(a), explain how B is like
(or may be like) A.

@i{Algorithm:}
Derive the reasons @G(b)@-<AB> why B is similar to A.@*
(These reasons must fall within @G(a)'s specification.)

@i{From examples:} @Ref(Comparison), @Ref(Prediction), @Ref(Literal)
@END(DESC1)

@BEGIN(DESC1)
[@ux<Find analogue>]

Given A and some constraint, @G(a), find (or construct) some B which is like A.

@i{Algorithm:} 
Use the constraint @G(a) to deduce/conjecture suitable properties for this B.@*
Then use this intensional description to find an appropriate B.

@i{From examples:} @Ref(FindOrGen), @Ref(ProblemRestate), @Ref(Literal)
@END(DESC1)

@BEGIN(DESC1)
[@ux<Judge/Select analogue(s)>]

Given A (and some constraint, @G(a),) find the B in {B@-<i>} which is most like A.

@i{Algorithm:}
First derive the reasons why A is like each B@-<i>, @G(b)@-<Ai>.@*
Then rank these, to find the best B@-<i>.

@i{From examples:} @Ref(Comparison) -- the legal and medical cases
(these seem relatively rare.)
@END(DESC1)
@ENDENUM1<>

@i(Proportional:)
@BEGENUM1<>
@BEGIN(DESC1)
[@ux<Find analogy>]

Explain how "x is to A" in the same manner as "y is to B".@*
(Find a relation which holds for both <x,A> and <y,B>.)

@i{Algorithm:}
Find the reasons, @G(b)@-<AB>, why A is like B,@*
subject to the constraint that this is used as a proportional analogy,
@G(a)@-<xy>.

@i{From examples:} @Ref(Literal), @Ref(Proportion)
@END(DESC1)

@BEGIN(DESC1)
[@ux<Find analogue>]

Find a y which is similar to B in the same way x is similar to A.@*
(Given x in some relation to A, find a y which has a similar relation to B.)

@i{Algorithm:}
Find the reasons, @G(b)@-<AB>, why A is like B,
subject to the proportional constraint @G(a)@-<x>.
Use these reasons to deduce/conjecture a new fact about B:@*
the value of the slot which, in A's case, is filled with x.

@i{From examples:} @Ref(Proportion)
@END(DESC1)

@BEGIN(DESC1)
[@ux<Judge/Select analogue(s)>]

Given x in some relation to A, find the y in {y@-<i>} which has
the most similar relation to B.

@i{Algorithm:}
Find a set of reasons, @G(b)@-<Ai>, telling why A is like B,
subject to the proportional constraint @G(a)@-<xy@-{i}>.
Then rank these to find the best y@-<i>.

@i{From examples:} @Ref(Proportion)
@END(DESC1)
@ENDENUM1<>

@i(Familial Relations:)
@BEGENUM1<>
@BEGIN(DESC1)
[@ux<Find analogy>]

Given {A@-<i>}, find how they are all similar.

@i{Algorithm:}
??? Think more about this case ???@*
1. Find the most specific unifying reason which explains each
@G(b)@-<ij> = the reason why A@-<i> is like A@-<j>
(given the @G(a)@-{?} constraint --
perhaps over all of the A@-<i>s?)@*
2. Sequentially find the reasons A@-<i> is like A@-<j>, @G(b)@-<ij>,
subject to the constraint of the preceding pair.
The final set of reasons will be the most specific general reasons why
these things are similar.@*
3. This requires recursion: Find the @G(a)@-<ij> as above.
... <<This doesn't seem to work>>

@i{From examples:} @Ref(ExplainNary)
@END(DESC1)

@BEGIN(DESC1)
[@ux<Find analogue>]

Find a new B which should be included in {A@-<i>}.

@i{Algorithm:}
First find the reasons why these {A@-<i>} are similar, @G(b)@-<i>,
subject to the constraint @G(a)@-<B>, that we will want to consider B's
admission to this group. ...@*
<<<THIS DOESN'T WORK!>>>

@i{From examples:} @Ref(Style)
@END(DESC1)

@BEGIN(DESC1)
[@ux<Judge/Select analogue(s)>]

Which B in {B@-<j>} fits best in {A@-<i>}?

@i{Algorithm:}
?

@i{From examples:} @Ref(FuzzyClass)
@END(DESC1)
@ENDENUM1<>
@ENDENUM1<>

@P(<Tasks which use the given analogy.>)
@BEGENUM1<>
@BEGIN(DESC1)
@BEGIN(DESC1)
[@ux<Deduction>]

Given A is like B for reasons @G(b), derive that P'(B) is true, when P(A).

@i{From examples:}
@Ref(Prediction)
@END(DESC1)

@BEGIN(DESC1)
[@ux<Conjecture>]

Given A is like B for reasons @G(b), conjecture that P'(B) is true, when P(A).

@i{From examples:}
@Ref(Prediction)
@END<DESC1>
@END<DESC1>
@ENDENUM1<>
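
As a concrete illustration of how the entries above reduce to the primitives of
Section @Ref(Primitives), here is a minimal sketch (in Python, with invented
representations; not taken from the report) of the similarity-sense
"Judge/Select analogue(s)" task: generate a reason @G(b)@-<Ai> for each candidate
B@-<i>, then rank the candidates by the strength of those reasons.
@BEGIN(Verbatim)
def generate_analogy(a, b, constraint):
    """The generation primitive, naively: beta_AB is the set of shared parts
    which the constraint alpha permits."""
    return {part for part in a if part in b and constraint(part)}

def judge_select(a, candidates, constraint=lambda part: True):
    """Return the candidate B_i whose reason beta_Ai is strongest (here, largest)."""
    reasons = {name: generate_analogy(a, b, constraint) for name, b in candidates.items()}
    best = max(reasons, key=lambda name: len(reasons[name]))
    return best, reasons[best]

# A hypothetical legal-precedent flavour: which prior case is most like case A?
case_a     = {"contract", "verbal-agreement", "minor-party"}
precedents = {"case1": {"contract", "written-agreement"},
              "case2": {"contract", "verbal-agreement", "adult-party"}}

print(judge_select(case_a, precedents))   # ('case2', {'contract', 'verbal-agreement'})
@END(Verbatim)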

@Chapter(Analogy Applications)
@Label(Applications) 

This is an appropriate place to ask "why" 
-- why should anyone go about generating an analogy;
that is, what advantage does the ability to understand an analogy provide?
Analogies serve (at least) two basic functions:
linguistic and deductive/predictive.
This next section presents a quick summary and explanation
of these two applications.
This is followed by a set of
observations and notes relating to these applications.
(A more direct answer to the "why" questions mentioned above is presented
in Appendix @Ref(WhyAnalogy).)

@Section(Capsule Summary of Applications)
@Label(Sum-Applic)

@BEGIN(Itemize)
@BEGIN(Multiple)
@C{Linguistic}@*
@BEGIN(DESC1)
@B[Motto:] @i[To communicate a lot quickly, using common "ground".]

@B[Scenario:]@\A speaker, S, wants to describe B to the hearer, H.
S does so by telling H that B is like A (for reasons @G<b>).

@B[Preconditions:]
H knows a great deal about A and relatively little about B,
while S knows a lot about both A and B.
(S must also know of H's knowledge of A.)

@B[Purpose:] To quickly relay a bundle of facts about B to the hearer.
(These facts constitute B's A-ness.)

@B[Subcases:]@\Explanation, exegesis, teaching a new idea;@*
elaborating (filling in) an unfamiliar case;@*
efficient storing 
(here H and S may be the same person, but at different times);@*
...

@B[Near miss:]@\If H already knew that both A and B were instances of the
same abstraction (and hence knew the "real" nature of the commonality as well),
the deductions implied by this analogy would reduce to straightforward,
guaranteed inference. (See note @Ref<DvsP> in Appendix @Ref<Misc>.)

@END(DESC1)
@END(Multiple)

@BEGIN[Multiple]
@C{Reasoning}@*
@BEGIN[DESC1]

@B[Motto:] @i[There is a (probably solid) reason why A is like B.]

@B[Scenario:]@\A reasoner, R, knows that A and B are similar.
R can then conjecture hypotheses about B, based on corresponding facts about A.@*
[Worded another way: R, knowing that B is like A in some ways,
asks "why not in this other way?".]

@B[Purpose:] To @i{deduce} or @i{conjecture} some assertions about B.

@B[Preconditions:] R knows that A is similar to B, and a body of
facts about A.
In general R knows a lot about A, and a little about B.  
He also has some idea how A is like B.

@B[Subcases:]@\(Each of the cases below can be either deductive or
merely predictive.)@*
Establishing previously unknown facts about B,@*
deciding that B cannot have property X because A does not,@*
realizing that B satisfies some of the equations A satisfied,@*
observing that some subcases of B correspond to known subcases of A,@*
reasoning that anything which is like A should be similar to B,@*
...

@B[Near miss:]
The "strength of the conviction" of the similarity can vary tremendously.
The deductive case, when R has enough facts to positively conclude something
about B, seems only borderline analogy.
It is distinguished from vanilla deduction in that
we insist that
R must actually store the "analogy" pointer from B to A, 
(probably labelled with the reasons,)
rather than simply the results of the deductions performed by virtue of this
connection.
Compare: "Fred eats the same quantity of food as Polly Parrot" versus
"Fred eats 3 pounds of food a day,
where Polly Parrot is also known to eat 3 pounds a day".
(The @G(b)-structures mentioned in @Cite<Merlin> are a simple example of
this.)@Foot{
Notice what this implies about analogies in general:
their function is subsumed 
by simple algorithmic methods (such as inheritance or instantiation)
when enough is known.
Analogizing is usually a largely heuristic method, 
most useful when you lack that "deep knowledge".}

@END[DESC1]
@END[Multiple]

@END[Itemize]
@Section(Elaboration of Applications)
@Label(AppElab)

The first obvious observation is that someone must be able to
understand the analogy in both cases --
H, the hearer, in the first case, and the reasoner R in the latter.
Only in the linguistic case does anyone have to explicitly generate the analogy 
-- the speaker S has this task.

Let's now consider these applications in more detail.
The Reasoning
case can be considered representational,
as a single reasoner uses an asserted (usually underspecified)
connection to infer new facts about an object.
There are two major subcategories, "Deduction" and "Prediction".
They differ only in how certain R is of his conclusion:
that conclusion may be a valid, well-justified inference (deduction),
or mere speculation (prediction), depending on how much R really knows
of the connection joining the analogues.
(Note @Ref(DvsP) in Appendix @Ref(Misc) tells why this Reasoning
application was NOT split into those two categories.)

Understanding this reasoning process is needed to follow the description of
the linguistic sense of analogy.
Note that S and H perform complementary tasks.
In the vocabulary defined in Section @Ref(Primitives),
S is responsible for generating the metaphor,
while H has the task of interpreting this message.
S's task is relatively easy.
He already knows the link between A and B --
that is, the connection between A and B falls out of his representation.
(See Appendix @Ref(Reform).)
Hence, no complicated reformulation process is needed to produce the analogy.
His task is simply to state the nature of this connection to H.@Foot{
This is, of course, an extreme simplification.
The conscientious speaker will consider (his perception of) H's @i{Weltanschauung}
when producing an analogy,
being careful to generate
only those analogies which someone with such a background could understand.
Hence S may, indeed, be forced to reformulate his description,
in simulating H's subsequent interpretation.}

Consider now the role of the person on the receiving end of this statement.
For the analogy to work, H must do a great deal of inferencing.@Foot{
@TAG(SigMess)
@Cite<Reddy> provides a nice description of this distribution of labor.
This article employs a cute, illuminating metaphor
to explain how useful metaphors are in communication
(@i<I.e.> its form nicely explicates its content.)
The article also provides a useful description of the distinction
between signal and meaning.
This distinction is essential when discussing
how the pragmatic meaning of an utterance can differ
from its semantic interpretation.
This difference is especially important to tropes such as metaphor.}
Consider the @i{electricity resembles fluid flow} example.
On hearing this, H must first look up his corpus of facts about fluid flow,
then decide which properties should carry over to the domain of electricity,
and finally map these fluid-related features onto corresponding features
in the electricity domain.

It is quite surprising how well people do at each of these steps.
The first and third subtasks
(of looking up the facts, and mapping them over,)
seem relatively straightforward, at least from an epistemological perspective.
The second, however, is next to unfathomable.  
How does one decide which features are salient for this analogy?
Why is it obvious that electricity is not wet,
but that its quantity should, like the fluids, be conserved across a junction?
In the @i{people are like computers} analogy,
why is it reasonable to conclude that a computer should be able to
perform symbolic deductions,
but unreasonable to assume that it is composed of neurons?
This issue, of deciding which parts to place in correspondence,
is the crux of intelligent analogizing.
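
A minimal sketch of H's three subtasks (in Python, with invented representations
and a deliberately stubbed second step; none of this comes from the report) may
help fix the division of labor:
@BEGIN(Verbatim)
# H interprets "electricity resembles fluid flow" in three steps.
FLUID_FACTS = {"wet": True, "quantity_conserved_at_junction": True, "flows_through_conduits": True}

def lookup_vehicle_facts(vehicle):
    """Subtask 1: retrieve H's corpus of facts about the vehicle analogue."""
    return dict(FLUID_FACTS) if vehicle == "fluid flow" else {}

def select_salient(facts):
    """Subtask 2: decide which properties should carry over.  This is the
    'next to unfathomable' step; here it is just a hard-coded guess."""
    return {name: value for name, value in facts.items() if name != "wet"}

def map_to_topic(facts):
    """Subtask 3: map the selected fluid features onto their electrical
    counterparts (a hypothetical renaming table stands in for the real mapping)."""
    rename = {"flows_through_conduits": "flows_through_wires"}
    return {rename.get(name, name): value for name, value in facts.items()}

print(map_to_topic(select_salient(lookup_vehicle_facts("fluid flow"))))
# {'quantity_conserved_at_junction': True, 'flows_through_wires': True}
@END(Verbatim)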

Let's examine why this second subtask is so complicated.
First, these particular "carry-overable" properties may not have been present 
in H's initial representation of fluid flow.
H would then need to reformulate this starting representation
into a form which explicates these desired properties.
The next issue is uncertainty:
in general H can only guess what properties S meant to communicate.
Hence he is conjecturing some fact about B, rather than deducing it.@Foot{
Of course, S might have relayed enough about the 
connection between A and B that H can logically (@i<i.e.> conclusively)
infer B's new properties.
This is quite rare:
one important reason S was using a metaphor in the first place
was efficiency,
as it required fewer words to transmit the same message.
It seems self-defeating to spend those saved words by giving
the long, detailed description usually required to render the meaning
conclusively inferable.}

This chapter concludes with two additional notes, both on the nature of S and H.
First, both the speaker and the hearer may be the same individual.
One application of this is efficient storage --
@i(a la) @Cite[Merlin]'s @G(b)-structures.
Here S/H can "bundle up" various facts about A by succinctly asserting that
"A is like B",
confident that he will later be able to disentangle this structure to tease out the
facts he needs about A.

Secondly,
analogy-like behavior can occur between a pair of machines,
as well as between humans.
The only requirement is that the same basic purpose be served
-- relatively few bits of information must be used
to transmit a complex concept.
This is only possible if the recipient
can actually infer (or guess) the additional facts needed
to flesh out the bundle of facts actually sent.
(This is an example of the Signal/Message issue mentioned in footnote 
@Ref(SigMess).)
(Note we are here using an abstraction of the idea of analogy --
@i{i.e.}, we are using an analogous definition of analogy,
attained by stretching/extending the standard definition of analogy.
See Example @Ref<Borrow>.)

@Chapter("Dimensions" of Analogy)
@Label(Dimensions)

The previous chapters mentioned various ways of cataloguing an analogy,
based on various features,
some inherent to the analogy itself,
and others to the situation/problem in which it was posed.
This chapter will enumerate a larger set of dimensions which can be used
to discriminate among different analogies.
The first section will summarize the dimensions which were
suggested in earlier chapters of this report.
The next two will proffer additional axes,
relating (respectively) to the analogy @i(per se),
and to the problem statement surrounding it.
There are also several dimensions specific to the metaphoric use of analogy;
these are listed in the final section.

@BEGIN(Enumerate)
@Section(Dimensions Already Covered)

@BEGIN(Multiple)
@Tag(Threesenses)
@B(Senses of Analogy)@*
There are three distinguishable types of analogy:
similarity, proportional, and familial.
While all seem to use the same type of underlying mechanism, 
each uses the result a little differently.
(See Section @Ref(Senses).)
Examples:
@BEGIN(ITEM1)
@i(Similarity:)@\Cattle are like sheep.

@i(Proportional:)@\Cow:Calf :: Ewe:Lamb

@i(Familial:)@\All farm animals are similar.
@END(ITEM1)
@END(Multiple)

@BEGIN(Multiple)
@Tag(Tasks)
@B(Analogy Tasks)@*
There are several basic types of analogy tasks,
discussed in Section @Ref(TaskTypes).
@BEGIN(ITEM1)
Find (@i{i.e.} explain) the analogy

Find an analogue

Judge/Select analogue(s)

Use the analogy -- to deduce/conjecture new facts about one of the analogues,
or to form an extended analogy.
<<here>>
@END(ITEM1)
@END(Multiple)

@BEGIN(Multiple)
@Tag(Functions)
@B(Analogy Functions)@*
As discussed in Chapter @Ref(Applications),
there are two basic reasons an analogy may be used:
@BEGIN(ITEM1)
Linguistic

Representative
@END(ITEM1)
@END(Multiple)

@BEGIN(Multiple)
@Tag(Conclusive?)
@B(Conclusiveness of Derivation)@*
There are two basic ways one can milk a particular analogical connection,
depending on the strength of the conclusion derived.
It can either be
@BEGIN(ITEM1)
a solid, provably valid deduction

a nice, plausible conjecture.
@END(ITEM1)
This dichotomy clearly holds only for the @i{use of an analogy} task.
This split shows the endpoints of
the full spectrum of "plausibilities",
bounded by "valid deduction" at one end and "random guess" at the other.
(See Section @Ref(AppElab).)
@END(Multiple)

@Section(Other Dimensions)
@BEGIN(Multiple)
@TAG(MvsI)
@B(Model @i{vs} Instance)@*
An analogy can map from either an abstract model or an instance
to either a different model or an instance.
The list below summarizes the four cases:
@BEGIN(ITEM1)
@i{Model to Model}: Electricity is like Water Flow.
(See Item @Ref<MM> in Appendix @Ref<Examples>.)

@i{Instance to Instance}:
Generating a new program, modelled after an earlier one.@Foot{
In his @Cite<SimonFriends> lecture,
Simon pointed out that human experts are familiar with between 
50 and 100 thousand particular instances or examples of their domain of expertise.
Understanding some new phenomenon involves
matching that new instance against one of these "friends".
This instance to instance pattern matching works so nicely because
things in a domain are basically similar to one another.
(That is probably why this body of examples is considered a field.)}@*
These mappings are often called learning from example(s).
(See Item @Ref(II).)

@i{Instance to Model}: <These have the basic flavor of induction.>

@i{Model to Instance}: <These seem to be simple instantiation.>
@END(ITEM1)

While there can be the same looseness of fit associated with analogies in general,
these latter two cases 
(instance to model and model to instance)
seem different from analogy 
-- that is, the complexities of answering these questions
are NOT of the "what does it mean to say X is like Y" variety.

We have (intentionally) not defined model, nor indicated precisely how it
differs from an instance.
Intuitively, an instance is a "ground case",
consisting exclusively of fully specified terms,
as opposed to quantified variables and other manners of intensional objects.
This definition readily leads to a full continuum of models,
ordered by number and nature of variables.
The mappings which seem most like analogies are those where the "model-ness" of the
analogues is roughly the same.  Hence Fred might be comparable with George,
or "typical man" with "typical zebra", but matching Fred with "typical zebra"
seems strange -- that is, not an instance of an analogical mapping.
(This point emerged briefly in Example @Ref(Prediction) above.)
@END(Multiple)

@BEGIN(Multiple)
@B(Degree of specificity)@*
The analogues involved in an analogy can have varying degrees of specificity.
Below we see the progression
from metaphor to simile to equation.
(Each of these counts as an acceptable analogy.)
@BEGIN(ITEM1)
People are birds.

People are like birds.

John is like a bird.

John eats like a bird.

John eats as much as a bird.

John eats as much as a small, full bird.

John eats as many sun-flower seeds as most birds eat.

John ate as many sun-flower seeds on June 24 as Polly parrot ate that day.
@END(ITEM1)
This same variance also holds for non-similarity analogies.
While this axis seems similar to the @i{Model versus Instance} case,
(@Ref<MvsI> above,) it is not.  
A model can have an arbitrary degree of specificity
and still be a model and not an instance --
comparing "Flow of 42@K{degree} Water over a Dam" with
"10 volts of current from a battery" is still a model-model comparison,
even though it is much more specified than
the "Water Flow" to "Electricity" case.
Similarly we might have a very general instance which is still regarded as an
instance, and not a model.
@END(Multiple)

@BEGIN(Multiple)
@Tag(OpenP)
@B(Openness @i{vs} closeness)@*
Some analogy connections seem quite bounded 
-- @i{i.e.}, the analogy can be used to answer but a single particular question.
Compare
@BEGIN(ITEM1)
@i(Closed:) Cow:Calf :: Ewe:@i{?}@*
What could possibly fill @i{?} but lamb;
and what else can one do with this analogy?

@i(Open:) Cognitively, people are like computers.@*
Many assertions can be generated from this comparison --
the usefulness of this analogy does NOT end once one fact about computers has
been transferred to people.
@END(ITEM1)
(This point is discussed in
@Cite<Boyd>, p363-372, who talks about "inductive Open-Endedness", and in
the description of "openness to explication" given in
@Cite<Pylyshyn>, p430-431.)
@END(Multiple)

@BEGIN(Multiple)
@B(Uni-directional @i{vs} Bi-directional)@*
This point is similar to Dimension @Ref(OpenP),
in considering how the analogy can be used.
Some analogies map only from the vehicle to the topic,
while others can be considered two way mappings
-- facts known about either analogue can be carried over to apply the other.
@BEGIN(ITEM1)
@i(Uni-directional:) John is a pig.@*
Note this describes John -- one's definition of pig is totally unaffected
by John's behaviour/weight/consideration, ...
(See @Cite<Searle>.)

@i(Bi-directional:)  Genes are like chromosomes.@*
As @Cite<Interfield> points out,
once the practitioners of both domains (genetics and biochemistry)
realized this interfield connection,
both groups were able to use this analogy to conjecture new facts.
@END(ITEM1)
@END(Multiple)

@BEGIN(Multiple)
@Tag(Seren)
@B(Serendipity @i{vs} Causally connected)@*
This is one of the major issues involved with any study of analogy --
whether the analogy @i{really} has a physical interpretation/reality,
or is just serendipity.  
This is clearly closely tied in with the openness issue raised in 
Dimension @Ref(OpenP).
When there is a meaningful correspondence between the pair of analogues
(@i{e.g.}, when they are "two perspectives of the same physical object" or 
represent "two models whose behaviour is dictated by the same set of equations"),
an entire class of facts about one analogue MUST correspond to facts of the other.

In the other case the "analogy" may simply be full
of (pleasant?) coincidences.
@BEGIN(ITEM1)
@i(Serendipity:) Speech is @i(laced) with metaphors.@*
Most of the corresponding features, (usually found @i(a posteriori)),
will be just curious coincidences.
One might regard a document as sickeningly sweet if it contains too much lacing;
we might similarly be distracted from the main part if there are too many
different laces;
or ...@Foot{
Different senses of the same word may or may not be in this category.
@Cite<Lakoff> would point to the 
classes of terms which collectively are transferred, systematically, 
from one domain to another, to argue that these are more than coincidental
correlations
in most cases.}

@i(Causally connected:) Genes are like chromosomes.@*
A gene really is a @i{part of} a chromosome.
@END(ITEM1)
@Cite<Boyd>'s description of cutting "nature at its joints"
is closely related to this.
@END(Multiple)

@BEGIN(Multiple)
@B[Define @i{vs} Refine]@*
@TAG<DeVSRe>
The goal of some analogies is to
elaborate a description of some known analogue,
that is, to state that some new property holds for that already known B.
The goal of other analogies is to determine the topic itself --
@i{i.e.}, to find (or generate) some B which satisfies some set of 
(analogically defined) properties.
Compare
@BEGIN<ITEM1>
@i<Define:> Find a command which is similar to C-F, but deals with words rather
than characters.

@i<Refine:> Given that M-F is like C-F,
(but deals with words rather than characters,)
explain what the M-F command does.
@END<ITEM1>
@END(Multiple)
@Section(Dimensions of Analogy PROBLEM)
@Comment{These dimensions deal with the problem statement,
not with the analogy itself.}

@BEGIN(Multiple)
@B<Obscure, strained @i{vs} "obvious", direct, natural>@*
This is related to Dimension @Ref(Seren) above 
-- analogies which are based on meaningful connections will 
(or at least should) seem natural,
while those based on mere cuteness will seem strained.
(Of course the meaningfulness of a connection will depend
on the reasoner's background,
and there may be a strong cultural bias towards one type of connection
and away from others.)

This dimension also deals with how immediately obvious the analogy is;
that is, on whether an explanation is required.
An obvious metaphor can be understood with no additional information.
(Again, this only applies to hearers suitably familiar with the 
speaker's background.)
Another way to look at this involves examining the features
which must be mapped from one analogue to the other.
When those features are known to be salient (with respect to the vehicle),
the analogy will be easy to understand;
otherwise the analogy will seem non-trivial,
and require an elaborating explanation to be understood.
@BEGIN(DESC1)
@i{Obscure:}@\John is a zebra.@*
"cow's lamb"

@i{Obvious:}@\ John is a packrat.@*
"laced speech"
@END(DESC1)
@END(Multiple)

@BEGIN(Multiple)
@Tag(Explicit)
@B(Explicitness of difference)@*
In some simple cases, one can map from one analogue to the other
by simply substituting one value of a parameter for another.
@BEGIN(DESC1)
@i(Explicit:) "@i(5p) is like @i(3p),
except it moves the cursor to the fifth page, not the third".@*
(Taken from the domain of editor commands.)

@i(Implicit:) Many of the equations for water flow
can be used to describe electrical circuits.@*
Here the "parameters" which distinguish these two cases are 
by no means obvious.
@END(DESC1)

Of course the best any computer program (or person?)
can do is find syntactic matches.
While this ability facilitates parameter substitution,
it is hard to see
how this could handle anything beyond the most superficial of analogies.

Realize this "explicitness" measure is strictly an artifact of the
representation used for the problem.
This research is strongly based on the assumption that any analogy can
be described as a modification of some parameter (or something isomorphic to
that in predicate calculus), @B(when the problem is stated in the correct
representation).
Appendix @Ref(Reform) elaborates this point, as does @Cite[ThesisProp].
@END(Multiple)

@BEGIN(Multiple)
@B<Refined/precise @i{vs} sloppy>@*
This is related to Dimension @Ref(Explicit) above,
and deals with the nature of the problem statement.
(Consider the case when there is more than one possible analogy joining two
analogues.  
A precise description distinguishes one of these as being the most relevant.)
Compare
@BEGIN<DESC1>
@i{Precise}: John and Fred have similar study habits and natural abilities.@*
(Hence their respective grades on the same course should be similar.)

@i{Sloppy}: Text editors are like text processors.@*
... because "both are computer programs", or "both deal with words", or
"there are no good examples of either", or ...
@END<DESC1>
(This point seems similar to @Cite[Gentner]'s distinction between the
@i{high-clarity} analogies characteristic of science and the @i{high-richness}
analogies usually found in literature.@Foot{
I am indebted to Steve Tappel for pointing this out.})
@END(Multiple)

@BEGIN(Multiple)
@B(Bounded @i{vs} Unrestricted)@*
This is also closely tied with the constraints presented in the problem statement,
on what will qualify as an acceptable analogue 
(or, in general, an acceptable analogy).
Consider the proportional case of 
@QUOTATION(circle : square :: sphere : @i{?}.)
Compare the cases when
@BEGIN(ITEM1)
@i{Bounded:} @i{?} @G(e) {line, tetrahedron, octagon, <x,y,z>, "90 degrees"}@*
[@i{?} = Tetrahedron, as it is, like a square, a regular figure with four sides (faces, in this case)...]

@i(Unrestricted:)  When @i{?} can be anything.
Here, for example, @i{?} might be a cube.
@END(ITEM1)
@END(Multiple)

@BEGIN(Multiple)
@B<Int@u{er}Field @i{vs} Int@u{ra}Field>@*
This is perhaps the fuzziest (and least useful) of the dimensions mentioned
here.  
The issue is whether the two analogues belong to the same domain or not.
Compare
@BEGIN(DESC1)
@i{Same domain:} "3p" is like "5p", except ...@*
(Both are editor commands.)

@i{Different domains:} Text editors are like secretaries, except ...@*
Text editors and secretaries are totally different types of entities;
similar only in some aspects of one of the roles both serve.
@END(DESC1)
@END(Multiple)

@Section(Dimensions of Metaphoric use of Analogy/PROBLEM)
@Comment{These dimensions pertain only to the use of analogy for communication.
(Outlined above in Section @Ref(Sum-Applic).)}

@BEGIN(Multiple)
@B{Interactive Communication?}@*
Talking to a responsive listener is quite different from
writing to a future, unseen reader.
In the first case the speaker can be more laconic,
knowing the hearer will interrupt and ask for elaboration if too much is omitted
-- a liberty not accorded the future reader of a written document.
This distinction is especially pronounced when dealing with
a linguistic form which requires as much interpretation
on the part of the hearer as analogy does.
@END(Multiple)

@BEGIN(Multiple)
@TAG(GenVsFind)
@B{Generate @i{vs} Find}@*
The dichotomy is most noticeable when searching for a suitable analogue.
Prior to this inquiry, the desired analogue may or may not exist.
@BEGIN<DESC1>
@i{Find:}@\@Cite[M&Ua], for example, searches for an existing program
which satisfies certain specifications.
(This is the basis on which the new program is CONSed up.)

@i{Generate:}@\Solving @Cite[Polya1]'s
"find the shortest distance between two points on the same side of a line" problem
involves generating a new, previously unseen problem --
find the shortest distance between two points on opposite sides of a line.
@END<DESC1>

We could similarly split the search for an analogy into two camps,
depending on whether a "new" analogy is to be generated,
or an existing one merely realized.
People have an intuition that certain analogies are well known,
existing prior to the current conversation,
while others are simply generated for this particular example.
For example, there are many connections which arise once
one considers speech to be @i{laced} with X --
but there seems no reason to assume these "existed" prior to the
initial statement of the analogy.

This case is clearly similar to Dimension @Ref(Seren) above,
as the serendipitous cases are (at best) being CONSed up,
while the analogies which correspond to some "real" causal connection
feel pre-existent
-- corresponding to facts nature has provided.
(We mention this point again in SubAppendix @Ref(WhyReform).)
@END<Multiple>

@BEGIN(Multiple)
@B{Nature of Features Mapped Over}@*
As noted in Section @Ref(AnalQuests),
different analogies are designed to answer different questions.
This dimension attempts to quantify what must be mapped 
from one analogue to the other to answer the implied question.

In the proportional case, there is basically a single feature mapped over --
(a possibly reified version of) the proportional object.
Similarly, in most literary analogies, only a single salient feature is mapped.
For prediction, one first observes some of a set of causally
related features appearing in both analogues, and then proceeds to
conjecture the remaining members of that set.
Explanation (@i{e.g.}, for comparisons) can vary 
-- usually many features must be mapped over to establish the connection.

Note in most cases additional analogical connections can then be noticed
or asserted, after the initial connection has been made.
Here, however, we are dealing only with that starting connection;
the particular analogical connection which the inquiry itself required.
@END(Multiple)

@END(Enumerate)
@Chapter(Properties of Analogy)
@Label(Properties)

This chapter will discuss some of the obvious properties of analogy,
emphasizing those which have not been covered earlier in this report.
Any analogizing program which aspires to achieve a human-like performance
(which is, we conjecture, a necessary prerequisite for being useful,)
should be based on a model which exhibits these characteristics.

@BEGIN(Enumerate)

@Tag(Subjective)
@i{Subjective}@*
Metaphors and analogies are very subjective -- that is, different people
will generate different "answers" to the same analogy questions
(@i{i.e.}, diverse analogues,
or different explanations of how X and Y are analogous, @i{etc}.)
Of course anyone should be able to understand any given analogy,
once it has been explained.
(Consider how often a person, on hearing the explanation of a metaphor, remarks
"Oh yea, I wouldn't have thought of it, but now that you mention it...")
Examples of this are provided in Appendix @Ref<Multiplicity>.

@i(Context Dependent)@*
@Tag(ContextDependent)
Analogies are also quite context dependent 
-- not only will different people propose different metaphors in the same situation,
but even the same person will find other "connections" in different contexts.
(@Cite<Rumelhart> and @Cite<Searle> each gloss over this point.
@Cite<Black2>, p39, refers to this context as a "frame".)
We return to this point in Appendix @Ref(Multiplicity).

@i<Spontaneous and Easy to Generate>@*
@Tag(EasyGen)
Analogies are sufficiently easy to produce that people
generate them spontaneously, often without conscious reflection.
This is true about analogies at all levels,
from linguistic metaphors up through the level of concepts.
Consider the frequency of our "... that reminds me of ..." experiences.
Further proof comes from realizing how prevalent various metaphor-like
tropes are -- ranging in explicitness from obvious comparisons
to similes to metaphors.
(We all know that some analogies are harder to generate than others.
Perhaps the ones which are found to be most useful will be the ones
which have not yet been fully exploited and incorporated by our culture,
and hence would be more difficult to produce.  Who knows?)

@i(Easy to Recognize)@*
@Tag(EasyRec)
As @Cite<GEB> commented, people will notice almost any prominent similarity,
even when they are NOT looking for it.
(In that book, there is a page in which a particular set of words appears 
several times
in corresponding positions in adjacent lines,
thereby lining up in columns.
Sure enough, even though no reader was actively expecting that, few would
miss this obvious repetition.)

@i(Hard to Explain)@*
@Tag(Inexplicable)
Even after we have "understood" an analogy,
it is often difficult to explain our reasons.
Seeing that there is a connection seems much easier than describing
what exactly this link is based on.
(This seems especially true for proportional analogies --
where the connection is already known to be via the proportional parts.)
(See @Cite<Darden>, @Cite<Schon>, p260, and @Cite<Boyd>, p357.)@Foot{
This may be simply an interesting property of consciousness.
For example, we might theorize that an elaborate "spreading activation" type of
mechanism is responsible for generating and testing a variety of
matches in parallel.  
If and when any of these exceeds some threshold, 
we feel that those objects are indeed analogous.  
The actual connection may not have been recorded 
(being rather space-expensive, and seldom used)
-- and a conscious effort is required to "reconstruct" the connection.
([Dietterich, @i<personal communication>]).}

@BEGIN(Multiple)
@i<Asymmetric>@*
@Tag(Asymetric)
Analogies, as used, are often not symmetric.
That is, "A is like B" will often mean something different from "B is like A".
This may be because A will usually have a different set of 
@i{a priori} salient features (and/or a different ranking of these features)
than B does (according to the reasoner).
(@Cite<Miller>, p217 discusses this point, as does @Cite<Ortony1>.)
Compare
@BEGIN(ITEM1)
Rock climbing is like walking.@*
[@i{i.e.} Rock Climbing is trivial, quickly and thoroughly learned, ...]

Walking is like Rock climbing.@*
[@i{i.e.} walking requires balance, and is done by moving various appendages, ...]
@END(ITEM1)

@Comment<(Or try
@QUOTATION(Rock climbing is like the American economy.)
both ways.
[One proceeds cautiously and tentatively, ready to back up ...])>

Realize the full analogy relation is still symmetric --
@i{A is like B for reason @G(b)} is still equivalent to
@i{B is like A for reason @G(b)}.
This asymmetry pertains to the @u{most apt} analogy.  
That is, the most apparent reason why @i{A is like B} may be @G(b)@-[A],
which may differ from @G(b)@-[B], the most apparent reason why
@i{B is like A}.
@END(Multiple)

@BEGIN(Multiple)
@Tag(NotXferable)
@i(Not Transferable)@*
Analogies are not transitive, probably for the same basic reasons mentioned above:
a different set of salient features may be used for each analogical connection.
Consider
@BEGIN(DESC1)
Lute playing is like mythology.@*
[@i{I.e.}, both had been quite popular and widely practiced,
but both have been obsoleted, ...]

Mythology is like science.@*
[@i{I.e.}, both served to answer certain teleological questions
(like why we are here), provide a code of ethics, define a community, ...]
@END(DESC1)
Taken together, these still do not imply that
@QUOTATION(Lute playing is like science.)
In particular, the reason justifying that connection will not follow from the
reasons which joined those other connections above.
@END(Multiple)

@END(Enumerate)
@Chapter(Conclusion)
@Label(Conclusion)

The dimensions and properties of analogies,
listed in the previous two chapters,
along with the miscellaneous examples given earlier,
provide a first step towards answering the question "@ux{what} is an analogy".
In the process, we touched on the issues of
@ux{why} an analogy should be used,
and @ux{by whom} and @ux{when} (@i{i.e.}, in what situations) --
all were addressed in the applications chapter, Chapter @Ref(Applications).
Of the reporter's question list, this leaves only the
@ux{where} inquiry, (which is meaningless here), and @ux{how}.

This @ux{how} question is quite relevant to any AI task.
It is, however, left for subsequent research,
particularly @Cite[ThesisProp].
While this report totally avoids the issue of how to build an analogizer,
it does provide some preliminary thoughts on what an analogizer 
(or more likely, an army of diverse specialized analogy programs)
should be able to do.
For example, Section @Ref(Primitives) demonstrated that one needs but
two types of operations to perform any analogy-related task.
(If only it told how to construct those two operators...)

This analysis also helped elucidate issues which still need to be addressed.
For example, it demonstrated the need to develop a language in which
to define the constraints and reasons associated with an analogy.
(See Sections @Ref(Primitives) and @Ref(AnalQuests).)
The vocabulary in Appendix @Ref(Analogy-Vocab) is a start in that direction.

In my particular case,
this analysis has helped me to decide what type of analogizer
I should construct, by suggesting which tasks are the @i{leaves}
of a dependency tree of analogy operations.
It has also led to a rich and diverse body of examples --
good test cases for this and subsequent analogizing programs.

@NewPage @MAJORHEADING<Acknowledgements>

Many people contributed useful and constructive comments to this survey,
and to this research project in general.
Many seminal ideas began in conversations with 
Steve Tappel, Tom Dietterich, and
Professors Doug Lenat, 
Bruce Buchanan, and Mike Genesereth.
I also want to thank 
Barbara Hayes-Roth,
...
for useful critiques made on various drafts of this paper.
Of course these people are responsible only for the good ideas;
all of the misconceptions can be attributed to a combination of
my misunderstandings and inherent ineptitudes.
This research was supported by the EURISKO grant from ONL
and by Core Research funds provided by NLM and ARPA.

@Comment{
People to read this:

	0th pass:
⊗ Jaime Carbonell, on Friday, 2/IV?

	1st pass:
⊗ Lindley Darden - to SUMEX, 1/May
? Tom Dietterich	[he copied, to read eventually]
⊗ Tom Pressburger - mailed, 3/May	[he copied, to read eventually]
⊗ Ann Gardner	 - mail box, 3/May
⊗ Coleen Crangle - Mailed (campus) 4-May
? Paul Cohen
⊗ Nancy Foster	- handed 4-May
⊗ TW		- handed, 3/May
* Barbara Hayes-Roth	- handed, 3/May  [she just skimmed]
⊗ SKLEIN	- to ISIB, 3/May
+ Pat Schoolery	- (told her about it on 3/May)
⊗ Jock		- on desk, 3/May
⊗ Steve Tappel	- contact 9-May, gave 10-May
⊗ Doug Hofstadter - gave 10-May

	2nd pass
  MRG (gave Appendix B, 3/May)
  DBL
  BGB
  Ron Brachman, Hector Levesque, Peter Hart

	others?
  Bill Clancey

----
"?" means asked, no response
"+" means (s)he wanted to read it
"⊗" means (s)he has a copy
"*" means I have comments from them
}